



eec7fee9a8595ca964b9a11562767345-Supplemental-Conference.pdf

Neural Information Processing Systems

A.1 Model Architecture

The architecture of the SinGAN used in our paper follows that in [4]. The trade-off parameter for the gradient penalty in WGAN-GP [3] is set to 0.1. Adam [5] is adopted as the stochastic optimizer with an initial learning rate of 0.0005 and a decay factor of 0.1 after finishing 80% of the iterations, and we set the maximum number of training iterations to 2,000.

C.2 Per-Stage Weight Distribution

In addition to the total weight distribution, a comparison of the per-stage weight distributions is also provided.
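The training schedule above can be sketched as a small helper (a hypothetical reconstruction, not the authors' code; the constant names are assumptions drawn from the values stated in the text):

```python
# Hyperparameters stated in the excerpt: 2,000 max iterations, initial
# learning rate 0.0005, decayed by a factor of 0.1 after 80% of training,
# and a WGAN-GP gradient-penalty weight of 0.1.
MAX_ITERS = 2000
INIT_LR = 5e-4
DECAY_FACTOR = 0.1
GP_WEIGHT = 0.1

def learning_rate(iteration: int) -> float:
    """Piecewise-constant LR: decays once after 80% of the iterations."""
    if iteration < 0.8 * MAX_ITERS:
        return INIT_LR
    return INIT_LR * DECAY_FACTOR
```

In a PyTorch setup this would typically be realized with `torch.optim.Adam` plus a step-style LR scheduler; the function above only makes the decay point explicit.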








Uncertainty Reasoning with Photonic Bayesian Machines

Brückerhoff-Plückelmann, F., Borras, H., Hulyal, S. U., Meyer, L., Ji, X., Hu, J., Sun, J., Klein, B., Ebert, F., Dijkstra, J., McRae, L., Schmidt, P., Kippenberg, T. J., Fröning, H., Pernice, W.

arXiv.org Artificial Intelligence

Artificial intelligence (AI) systems increasingly influence safety-critical aspects of society, from medical diagnosis to autonomous mobility, making uncertainty awareness a central requirement for trustworthy AI. We present a photonic Bayesian machine that leverages the inherent randomness of chaotic light sources to enable uncertainty reasoning within the framework of Bayesian Neural Networks. The analog processor features a 1.28 Tbit/s digital interface compatible with PyTorch, enabling probabilistic convolution processing within 37.5 ps per convolution. We use the system for simultaneous classification and out-of-domain detection of blood cell microscope images and demonstrate reasoning between aleatoric and epistemic uncertainties. The photonic Bayesian machine removes the bottleneck of pseudo-random number generation in digital systems, minimizes the cost of sampling for probabilistic models, and thus enables high-speed trustworthy AI systems.
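The aleatoric/epistemic split the abstract refers to is, in standard Bayesian deep learning practice, computed from Monte Carlo samples of the network's predictive distribution. A minimal numpy sketch of that standard decomposition (not the paper's photonic implementation, where the sampling happens in the analog hardware):

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy of a categorical distribution, in nats."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def decompose_uncertainty(samples):
    """samples: (S, C) array of softmax outputs from S stochastic
    forward passes over C classes.

    Returns (total, aleatoric, epistemic):
      total     = entropy of the mean prediction,
      aleatoric = mean entropy of individual predictions,
      epistemic = total - aleatoric (mutual information, >= 0).
    """
    mean_p = samples.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(samples, axis=-1).mean()
    return total, aleatoric, total - aleatoric
```

When all samples agree, epistemic uncertainty vanishes and only the aleatoric term remains; disagreement between samples shows up as a positive epistemic term, which is what drives out-of-domain detection.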


Q-SAM2: Accurate Quantization for Segment Anything Model 2

Farronato, Nicola, Scheidegger, Florian, Rigotti, Mattia, Malossi, Cristiano, Magno, Michele, Qin, Haotong

arXiv.org Artificial Intelligence

The Segment Anything Model 2 (SAM2) is a powerful foundation model for promptable segmentation. However, its high computational and memory costs are a major barrier to deployment on resource-constrained devices. In this paper, we present Q-SAM2, an accurate low-bit quantization method that achieves high compression and high fidelity. To address performance degradation arising from challenging weight and activation distributions during quantization, Q-SAM2 introduces two novel contributions: Variance-Reduced Calibration (VRC), an initialization method that reduces weight statistical variance by minimizing the Frobenius norm over a small calibration batch; and Learnable Statistical Clipping (LSC), a Quantization-Aware Training (QAT) method that learns momentum-stabilized clipping factors to manage outliers in weights and activations. Comprehensive experiments demonstrate that Q-SAM2 achieves highly accurate inference with substantial efficiency gains, significantly surpassing state-of-the-art general QAT schemes, particularly in the ultra-low 2-bit regime. Specifically, Q-SAM2 achieves an accuracy gain of up to 9.7 ppt in J&F on the video segmentation benchmark and 7.3 ppt in mIoU for instance segmentation over the best competing QAT model, all while achieving an 8x reduction in model size compared to the BF16 baseline.
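The clipping idea behind LSC can be illustrated with a plain uniform symmetric quantizer whose clipping threshold is a tunable parameter (a generic sketch under assumed conventions, not Q-SAM2's actual QAT procedure; the function names and the EMA update are hypothetical):

```python
import numpy as np

def quantize_clipped(w, alpha, bits=2):
    """Uniform symmetric quantization of w with clipping threshold alpha.

    Values are clipped to [-alpha, alpha], then rounded onto a uniform
    grid with 2**(bits-1) - 1 positive levels. For bits=2 the grid is
    {-alpha, 0, +alpha}.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = alpha / qmax
    w_clipped = np.clip(w, -alpha, alpha)
    return np.round(w_clipped / scale) * scale

def ema_update_alpha(alpha, batch_absmax, momentum=0.9):
    """Momentum-stabilized update of the clipping factor: an exponential
    moving average toward the current batch's absolute maximum, so that
    single outlier batches cannot swing the threshold abruptly."""
    return momentum * alpha + (1.0 - momentum) * batch_absmax
```

In an actual QAT setting the clipping factor would be a learnable tensor updated through the straight-through gradient estimator; the EMA shown here only conveys why momentum stabilizes the threshold against activation outliers.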